End-to-end models are becoming a popular approach to mispronunciation detection and diagnosis (MDD). A streaming MDD framework, which many practical applications require, remains a challenge. This paper presents a streaming end-to-end MDD framework named CCA-MDD. CCA-MDD supports online processing and is able to run in real time. The encoder of CCA-MDD consists of a Conv-Transformer-based streaming acoustic encoder and an improved cross-attention module named coupled cross-attention (CCA). The coupled cross-attention integrates the encoded acoustic features with pre-encoded linguistic features. An ensemble of decoders trained with multi-task learning is applied for the final MDD decision. Experiments on publicly available corpora show that CCA-MDD achieves performance comparable to published offline end-to-end MDD models.
translated by 谷歌翻译
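The coupled cross-attention fusion described above can be illustrated as standard scaled dot-product cross-attention, with encoded acoustic frames as queries over pre-encoded linguistic features. This is a minimal NumPy sketch under assumed shapes and names, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(acoustic, linguistic):
    """Acoustic frames attend over pre-encoded linguistic features.

    acoustic:   (T, d) encoded acoustic frames (queries)
    linguistic: (L, d) pre-encoded linguistic features (keys/values)
    Returns fused features of shape (T, d).
    """
    d = acoustic.shape[-1]
    scores = acoustic @ linguistic.T / np.sqrt(d)   # (T, L) similarity
    weights = softmax(scores, axis=-1)              # attention weights per frame
    return weights @ linguistic                     # (T, d) fused output
```

In the streaming setting, the linguistic side is fixed in advance, so only the per-frame query projections need to be computed online.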
Pretrained language models have achieved great success on various natural language understanding (NLU) tasks, as they capture deep contextualized information in text by pretraining on large-scale corpora. In this technical report, we present our practice of pretraining language models, named NEZHA, on Chinese corpora and finetuning them for Chinese NLU tasks. The current version of NEZHA is based on BERT with a collection of proven improvements, including Functional Relative Positional Encoding as an effective positional encoding scheme, the Whole Word Masking strategy, Mixed Precision Training, and the LAMB optimizer for training the models. Experimental results show that NEZHA achieves state-of-the-art performance on several representative Chinese tasks, including named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti), and natural language inference (XNLI).
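The Whole Word Masking strategy mentioned above masks all WordPiece pieces of a word together rather than masking subword pieces independently. A simplified stdlib sketch (token names and the masking rate are illustrative assumptions, not NEZHA's actual configuration):

```python
import random

def whole_word_masking(tokens, mask_rate=0.5, seed=0):
    """Whole Word Masking sketch: "##"-prefixed WordPiece continuations are
    grouped with their head piece, and each whole word is masked as a unit.
    """
    rng = random.Random(seed)
    # Group subword pieces into whole words.
    words, cur = [], []
    for t in tokens:
        if t.startswith("##") and cur:
            cur.append(t)
        else:
            if cur:
                words.append(cur)
            cur = [t]
    if cur:
        words.append(cur)
    # Mask per word, not per piece.
    masked = []
    for w in words:
        if rng.random() < mask_rate:
            masked.extend(["[MASK]"] * len(w))
        else:
            masked.extend(w)
    return masked
```

This keeps the model from exploiting unmasked sibling pieces of the same word to trivially reconstruct the masked piece.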
Industrial vision anomaly detection plays a critical role in the advanced intelligent manufacturing process, while some limitations still need to be addressed in this context. First, existing reconstruction-based methods struggle with the identity-mapping shortcut, where the reconstruction error gap between normal and abnormal samples becomes negligible, leading to inferior detection capability. Second, previous studies have mainly concentrated on convolutional neural network (CNN) models, which capture the local semantics of objects but neglect the global context, also resulting in inferior performance. Moreover, existing studies follow an individual-learning fashion in which a detection model handles only one product category, while generalizable detection across multiple categories has not been explored. To tackle these limitations, we propose a self-induction vision Transformer (SIVT) for unsupervised, generalizable, multi-category industrial visual anomaly detection and localization. The proposed SIVT first extracts discriminative features from a pre-trained CNN as property descriptors. Then, the self-induction vision Transformer is proposed to reconstruct the extracted features in a self-supervisory fashion, where auxiliary induction tokens are additionally introduced to induct the semantics of the original signal. Finally, abnormal properties can be detected using the semantic feature residual difference. We experimented with SIVT on the existing MVTec AD benchmark; the results reveal that the proposed method advances state-of-the-art detection performance, with improvements of 2.8-6.3 in AUROC and 3.3-7.6 in AP.
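The "semantic feature residual difference" detection step above amounts to scoring each spatial patch by how far its reconstructed descriptor deviates from the original. A minimal NumPy sketch of that scoring (array shapes are assumptions for illustration):

```python
import numpy as np

def residual_anomaly_map(features, reconstructed):
    """Per-patch anomaly score as the feature-residual magnitude.

    features:      (H, W, C) patch descriptors from a pretrained CNN
    reconstructed: (H, W, C) Transformer reconstruction of the same features
    Returns an (H, W) anomaly map; larger values mean more anomalous.
    """
    return np.linalg.norm(features - reconstructed, axis=-1)

def image_score(anomaly_map):
    # Image-level score: the most anomalous patch response.
    return float(anomaly_map.max())
```

Because the model is trained to reconstruct only normal feature statistics, anomalous patches reconstruct poorly and stand out in the residual map.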
Phase recognition plays an essential role in surgical workflow analysis for computer-assisted intervention. Transformer, originally proposed for sequential data modeling in natural language processing, has been successfully applied to surgical phase recognition. Existing Transformer-based works mainly focus on modeling attention dependencies without introducing auto-regression. In this work, an Auto-Regressive Surgical Transformer (referred to as ARST) is first proposed for online surgical phase recognition from laparoscopic videos, modeling the correlations between phases implicitly via conditional probability distributions. To reduce inference bias and enhance phase consistency, we further devise a consistency-constraint inference strategy based on the auto-regression. We perform comprehensive validation on the well-known public dataset Cholec80. Experimental results show that our method outperforms state-of-the-art methods both quantitatively and qualitatively, and achieves an inference rate of 66 frames per second (FPS).
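The idea of conditioning the current phase on the previous one can be sketched with a simple greedy online decoder: multiply the per-frame posterior by a phase-transition prior before taking the argmax. The phase names, transition matrix, and greedy rule below are illustrative assumptions, not ARST's actual learned model:

```python
import numpy as np

# Hypothetical transition prior: phases mostly persist, occasionally advance.
TRANS = np.array([
    [0.90, 0.10, 0.00],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
])

def online_decode(frame_probs):
    """Greedy online decoding of p(phase_t | phase_{t-1}, frame_t).

    frame_probs: (T, K) per-frame phase posteriors from the visual model.
    Returns the decoded phase index per frame.
    """
    path, prev = [], None
    for p in frame_probs:
        # Condition on the previously decoded phase (auto-regression).
        joint = p if prev is None else TRANS[prev] * p
        prev = int(np.argmax(joint))
        path.append(prev)
    return path
```

A spurious single-frame jump to a phase that cannot follow the current one is suppressed by the zero-probability transitions, which is the intuition behind the consistency-constraint inference.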
In surface defect detection, anomaly detection methods based only on positive (defect-free) samples have attracted increasing attention due to the extreme imbalance between the numbers of positive and negative samples. Among them, reconstruction-based methods are the most popular. However, existing methods either struggle to repair abnormal foregrounds or fail to reconstruct clean backgrounds. We therefore propose a Clear Memory-Augmented Auto-Encoder (CMA-AE). First, we propose a novel clear memory-augmented module, which combines encoding and memory encoding in a forgetting-and-inputting fashion, thereby repairing abnormal foregrounds while preserving clear backgrounds. Second, a general artificial anomaly generation algorithm is proposed to simulate anomalies that are as realistic and feature-rich as possible. Finally, we propose a novel multi-scale feature residual detection method for defect segmentation, which makes defect localization more accurate. Comparative experiments of CMA-AE against eleven state-of-the-art methods on five benchmark datasets show an average improvement of 18.6% in F1-measure.
The geometry of an airfoil, together with the corresponding flight conditions, is a crucial factor in aerodynamic performance prediction. In most existing approaches (e.g., geometric parameter extraction, polynomial description, and deep learning), the extracted geometric features of the airfoil lie in Euclidean space. State-of-the-art research has shown that the curves or surfaces of an airfoil form a manifold in Riemannian space; hence, the features extracted by existing methods are insufficient to reflect the airfoil's geometric characteristics. Meanwhile, flight conditions and geometric features are heterogeneous factors carrying very different kinds of prior knowledge, and both must be evaluated and learned jointly for the final aerodynamic prediction to improve accuracy. Motivated by manifold theory and the advantages of multi-task learning, we propose a manifold-based airfoil geometric-feature extraction and discrepant data fusion learning method (MDF) to extract the geometric features of airfoils in Riemannian space (we call them manifold features) and further fuse the manifold features with flight conditions to predict aerodynamic performance. Experimental results show that, compared with existing methods, our method extracts the geometric features of airfoils more accurately: the average MSE of reconstructed airfoils is reduced by 56.33%, and, while keeping the accuracy of CL predictions at the same level, the MSE of CD predictions by MDF is further reduced by 35.37%.
Recent advances in the industrial inspection of textured surfaces, in the form of visual inspection, have made such inspections viable for efficient, flexible manufacturing systems. We propose an unsupervised feature memory rearrangement network (FMR-Net) to accurately detect various textural defects simultaneously. Consistent with mainstream methods, we adopt the idea of background reconstruction; however, we innovatively utilize artificial synthetic defects to enable the model to recognize anomalies, whereas traditional wisdom relies only on defect-free samples. First, we employ an encoding module to obtain multiscale features of the textured surface. Subsequently, a contrastive-learning-based memory feature module (CMFM) is proposed to obtain discriminative representations and to construct a normal-feature memory bank in the latent space, which can serve as a substitute for defective patches and provide fast patch-level anomaly scores. Next, a novel global feature rearrangement module (GFRM) is proposed to further suppress the reconstruction of residual defects. Finally, a decoding module utilizes the restored features to reconstruct the normal texture background. In addition, to improve inspection performance, a two-phase training strategy is utilized for precise defect restoration refinement, and we exploit a multimodal inspection method to achieve noise-robust defect localization. We verify our method through extensive experiments and practical deployment, via a multilevel detection method, in a collaborative edge-cloud intelligent manufacturing scenario, showing that FMR-Net exhibits state-of-the-art inspection accuracy and great potential for use in edge-computing-enabled smart industries.
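The fast patch-level anomaly scoring against a normal-feature memory bank can be sketched as a nearest-neighbor distance: a patch that lies far from every stored normal feature is likely defective. A minimal NumPy sketch (shapes and the plain Euclidean metric are assumptions; the paper's CMFM learns its representation contrastively):

```python
import numpy as np

def patch_anomaly_scores(patches, memory_bank):
    """Patch-level anomaly score: distance to the nearest normal feature.

    patches:     (N, C) patch features from a test image
    memory_bank: (M, C) features collected from defect-free samples
    Returns (N,) scores; patches far from the memory bank score high.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = ((patches[:, None, :] - memory_bank[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1))
```

The same nearest entries can also be substituted back in place of suspected defective patches before decoding, which supports the background-reconstruction objective.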
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this observation, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
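The iRMB's combination of an inverted-residual expansion with Transformer-style dynamic mixing can be caricatured in a few lines. This is a toy NumPy sketch of the general shape only; the real iRMB also uses depthwise convolution for local modeling, and all weights and shapes here are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def irmb_sketch(x, w_expand, w_project):
    """Toy iRMB over a token sequence.

    x:         (N, C) input tokens
    w_expand:  (C, E) expansion weights (E > C, MobileNetv2-style)
    w_project: (E, C) projection back to C channels
    """
    h = np.maximum(x @ w_expand, 0.0)               # expand + ReLU
    attn = softmax(h @ h.T / np.sqrt(h.shape[-1]))  # dynamic token mixing
    h = attn @ h                                    # long-distance interaction
    return x + h @ w_project                        # project back + residual
```

The point of the Meta-Mobile Block abstraction is that swapping what happens inside the expanded space (static conv vs. dynamic attention) yields different known blocks from one template.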
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
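The triple-to-question step described above can be sketched with simple templates: from one <subject, predicate, object> triple, the subject answers an object-anchored question and vice versa. The templating below is a simplified, hypothetical scheme (the actual PIE-QG pipeline works over paraphrased passages and OpenIE output):

```python
def qa_pairs_from_triple(subject, predicate, obj):
    """Form two synthetic QA pairs from one OpenIE triple.

    Returns (question, answer) tuples: one with the object in the question
    and the subject as the answer, and one the other way around.
    """
    return [
        (f"What {predicate} {obj}?", subject),   # object + predicate -> subject
        (f"{subject} {predicate} what?", obj),   # subject + predicate -> object
    ]
```

These synthetic pairs then stand in for human-labeled data when finetuning the extractive QA model.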
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such data is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement generated by ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the capacity of the Transformer on limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer is thereby encouraged to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
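The online-predicts-target objective above is typically trained with a BYOL-style negative cosine similarity between the online prediction and the (stop-gradient) target representation. A minimal NumPy sketch of that consistency loss (the function name and shapes are assumptions; BOLT's full objective also includes the difficulty ranking term):

```python
import numpy as np

def consistency_loss(online_pred, target_repr):
    """Negative-cosine consistency loss, averaged over tokens.

    online_pred, target_repr: (N, D) token representations.
    Equals 0 when the normalized representations match exactly,
    and 2 when they are orthogonal.
    """
    o = online_pred / np.linalg.norm(online_pred, axis=-1, keepdims=True)
    t = target_repr / np.linalg.norm(target_repr, axis=-1, keepdims=True)
    return float(2.0 - 2.0 * (o * t).sum(-1).mean())
```

Only the online branch receives gradients from this loss; the target branch is typically updated as a momentum average of the online weights.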